2023-11-24 | AIbase
Oxford University AI Researcher Warns: Large Language Models Pose Risks to Scientific Truth
An AI researcher at Oxford University has warned that large language models (LLMs) may threaten scientific integrity. The study argues that treating LLM output as a source of information puts scientific truth at risk, and that casual use of LLMs in scientific papers could cause significant harm. It therefore calls for a change in how LLMs are used: as 'zero-shot translators' that transform vetted source material supplied by the researcher, rather than generating claims from their own training data, so that outputs stay factually accurate.
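To make the 'zero-shot translator' idea concrete, the sketch below contrasts the two usage patterns the study distinguishes. It is a minimal illustration, not the paper's implementation: `call_llm` is a hypothetical stand-in for whatever model API a researcher actually uses.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any LLM API client."""
    raise NotImplementedError("plug in a real model client here")


def ask_model_for_facts(question: str) -> str:
    # Risky pattern: the model itself is the information source,
    # so any hallucinated "fact" flows straight into the output.
    return call_llm(f"Answer from your own knowledge: {question}")


def translate_vetted_text(source_text: str, instruction: str) -> str:
    # "Zero-shot translator" pattern: every fact comes from vetted
    # input the researcher supplies; the model only transforms it
    # (summarizes, rephrases, reformats) without adding content.
    prompt = (
        "Using ONLY the source text below, and adding no facts of your own:\n"
        f"{instruction}\n\n--- SOURCE ---\n{source_text}"
    )
    return call_llm(prompt)
```

The second pattern is auditable in a way the first is not: because all factual content originates in the supplied source text, the output can be checked against it sentence by sentence.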